public comment


Lost in Translation: Policymakers are not really listening to Citizen Concerns about AI

Aaronson, Susan Ariel, Moreno, Michael

arXiv.org Artificial Intelligence

The world's people have strong opinions about artificial intelligence (AI), and they want policymakers to listen. Governments are inviting public comment on AI, but as they translate input into policy, much of what citizens say is lost. Policymakers are missing a critical opportunity to build trust in AI and its governance. This paper compares three countries, Australia, Colombia, and the United States, that invited citizens to comment on AI risks and policies. Using a landscape analysis, the authors examined how each government solicited feedback and whether that input shaped governance. In none of the three cases did citizens and policymakers establish a meaningful dialogue. Governments did little to attract diverse voices or publicize calls for comment, leaving most citizens unaware or unprepared to respond. In each nation, fewer than one percent of the population participated. Moreover, officials showed limited responsiveness to the feedback they received, failing to create an effective feedback loop. The study finds a persistent gap between the promise and practice of participatory AI governance. The authors conclude that current approaches are unlikely to build trust or legitimacy in AI because policymakers are not adequately listening or responding to public concerns. They offer eight recommendations: promote AI literacy; monitor public feedback; broaden outreach; hold regular online forums; use innovative engagement methods; include underrepresented groups; respond publicly to input; and make participation easier.


Despite Protests, Elon Musk Secures Air Permit for xAI

WIRED

A local health department in Memphis has granted Elon Musk's xAI data center an air permit to continue operating the gas turbines that power the company's Grok chatbot. The permit comes amid widespread community opposition and a looming lawsuit alleging the company violated the Clean Air Act. The Shelby County Health Department released its air permit for the xAI project Wednesday, after receiving hundreds of public comments. The news was first reported by the Daily Memphian. In June, the Memphis Chamber of Commerce announced that xAI had chosen a site in Memphis to build its new supercomputer.


PUBLICSPEAK: Hearing the Public with a Probabilistic Framework in Local Government

Xu, Tianliang, Brown, Eva Maxfield, Dwyer, Dustin, Tomkins, Sabina

arXiv.org Artificial Intelligence

Local governments around the world are making consequential decisions on behalf of their constituents, and these constituents are responding with requests, advice, and assessments of their officials at public meetings. So many small meetings cannot be covered by traditional newsrooms at scale. We propose PUBLICSPEAK, a probabilistic framework which can utilize meeting structure, domain knowledge, and linguistic information to discover public remarks in local government meetings. We then use our approach to inspect the issues raised by constituents in 7 cities across the United States. We evaluate our approach on a novel dataset of local government meetings and find that PUBLICSPEAK improves over the state of the art by 10% on average, and by up to 40%.


LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

Pan, Zhuoshi, Wu, Qianhui, Jiang, Huiqiang, Xia, Menglin, Luo, Xufang, Zhang, Jue, Lin, Qingwei, Rühle, Victor, Yang, Yuqing, Lin, Chin-Yew, Zhao, H. Vicky, Qiu, Lili, Zhang, Dongmei

arXiv.org Artificial Intelligence

This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Given the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information entropy, obtained from a causal language model such as LLaMa-7B. The challenge is that information entropy may be a suboptimal compression metric: (i) it only leverages unidirectional context and may fail to capture all essential information needed for prompt compression; (ii) it is not aligned with the prompt compression objective. To address these issues, we propose a data distillation procedure to derive knowledge from an LLM to compress prompts without losing crucial information, and at the same time introduce an extractive text compression dataset. We formulate prompt compression as a token classification problem to guarantee the faithfulness of the compressed prompt to the original one, and use a Transformer encoder as the base architecture to capture all essential information for prompt compression from the full bidirectional context. Our approach leads to lower latency by explicitly learning the compression objective with smaller models such as XLM-RoBERTa-large and mBERT. We evaluate our method on both in-domain and out-of-domain datasets, including MeetingBank, LongBench, ZeroScrolls, GSM8K, and BBH. Despite its small size, our model shows significant performance gains over strong baselines and demonstrates robust generalization ability across different LLMs. Additionally, our model is 3x-6x faster than existing prompt compression methods, while accelerating the end-to-end latency by 1.6x-2.9x with compression ratios of 2x-5x.
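The token-classification framing described in the abstract can be illustrated with a minimal sketch. In the real system a trained bidirectional encoder (e.g. an XLM-RoBERTa token classifier) would assign each token a "preserve" probability; here those scores are stubbed with hypothetical values so that only the selection step, keeping the highest-scoring tokens in their original order at a target compression ratio, is shown.

```python
import math
from typing import List

def compress_prompt(tokens: List[str], keep_probs: List[float],
                    ratio: float = 0.5) -> List[str]:
    """Keep the ceil(len(tokens) * ratio) tokens with the highest
    predicted 'preserve' probability, preserving original order."""
    k = max(1, math.ceil(len(tokens) * ratio))
    # Rank token positions by predicted importance.
    ranked = sorted(range(len(tokens)),
                    key=lambda i: keep_probs[i], reverse=True)
    keep = set(ranked[:k])
    return [tok for i, tok in enumerate(tokens) if i in keep]

tokens = ["Please", "kindly", "summarize", "the",
          "attached", "quarterly", "report"]
# Hypothetical classifier outputs; a real model would derive these
# from the full bidirectional context of the prompt.
probs = [0.20, 0.10, 0.95, 0.30, 0.60, 0.90, 0.92]
print(compress_prompt(tokens, probs, ratio=0.5))
# → ['summarize', 'attached', 'quarterly', 'report']
```

Because selection is extractive (tokens are kept verbatim, in order), the compressed prompt cannot hallucinate content, which is the faithfulness guarantee the authors cite for the token-classification formulation.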


Kamala Harris announces AI Safety Institute to protect American consumers

Engadget

Just days after President Joe Biden unveiled a sweeping executive order retasking the federal government with regard to AI development, Vice President Kamala Harris announced at the UK AI Safety Summit on Tuesday a half dozen more machine learning initiatives that the administration is undertaking. "President Biden and I believe that all leaders, from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits," Harris said in her prepared remarks. "Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions," she said. The existential threats that generative AI systems present were a central theme of the summit. "To define AI safety we must consider and address the full spectrum of AI risk -- threats to humanity as a whole, threats to individuals, to our communities and to our institutions, and threats to our most vulnerable populations," she continued.


City Council votes to accept donation of controversial LAPD robot dog

Los Angeles Times

Amid lingering concerns about surveillance and safety, the Los Angeles City Council voted Tuesday to accept the donation of a nearly $280,000 dog-like robot for the police department's use. The 8-4 vote followed more than a dozen public comments urging council members to vote against the controversial device, which would be paid for with a donation from the Los Angeles Police Foundation. Council members also approved a plan to have the LAPD provide quarterly reports on deployment of the device, including where and why it was deployed, the outcome and any issues. "This item is being painted as merely an acceptance of a donation, but it really represents an expansion of the current boundaries around policing and surveillance," Councilmember Hugo Soto-Martínez said before voting no. "This is not the vision of the community that I believe Los Angeles should be."


Guidelines for use of AI in healthcare are on track, says CHAI

#artificialintelligence

The Coalition for Health AI (CHAI) has announced it will meet this month to finalize its consensus-driven framework and share recommendations by year-end, according to a progress update. CHAI convened in December to develop consensus and mutual understanding, with the goals of tempering the rush to buy artificial intelligence and machine learning products in healthcare and arming health IT decision-makers with academic research and vetted guidelines to help them choose dependable technologies that provide value. Through October 14, CHAI is accepting public comments on its work examining testability, usability and safety, which grew out of a workshop the organization held in July with subject-matter experts from healthcare and other industries. Previously, CHAI produced a sizable paper on bias, equity and fairness based on a two-day convening and accepted public comments until the end of last month. The result will be a framework, the "Guidelines for the Responsible Use of AI in Healthcare," that intentionally fosters resilient AI assurance, safety and security, according to the October 6 progress update. "Application of AI brings a tremendous benefit for patient care, but so is its potential to exacerbate inequity in healthcare," said Dr. John Halamka, president of Mayo Clinic Platform and cofounder of the coalition, in the update.


LAPD panel approves new oversight of facial recognition, rejects calls to end program

Los Angeles Times

The Los Angeles Police Commission approved a policy Tuesday that set new parameters on the LAPD's use of facial recognition technology, but stopped far short of the outright ban sought by many city activists. The move followed promises by the commission to review the Los Angeles Police Department's use of photo-comparison software in September, after The Times reported that officers had used the technology -- contrary to department claims -- more than 30,000 times since 2009. The new policy restricts LAPD detectives and other trained officers to using a single software platform operated by the Los Angeles County Sheriff's Department, which only uses mugshots and is far less expansive than some third-party search platforms. It also mandates new measures for tracking the Police Department's use of the county system and its outcomes in the crime fight. Commissioners and top police executives praised the policy as a step in the right direction, saying it struck the right balance between protecting people's civil liberties and giving cops the tools they need to solve and reduce crime -- which is on the rise.


The real threat of fake voices in a time of crisis

#artificialintelligence

Latanya Sweeney is a professor of government and technology in residence at Harvard University's Department of Government, editor-in-chief of Technology Science and the founding director of the Technology Science Initiative and the Data Privacy Lab at the Institute for Quantitative Social Science at Harvard. Max Weiss is a senior at Harvard University and the student who implemented the Deepfake Text experiment. As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post these rules publicly and allow the public to contribute their comments to the proposed rules online. But are federal public comment websites -- a vital institution for American democracy -- secure in this time of crisis?


The U.S. Patent and Trademark Office Takes on Artificial Intelligence (JD Supra)

#artificialintelligence

If the hallmark of intelligence is problem solving, then it should be no surprise that artificial intelligence is being called on to solve complex problems that human intelligence alone cannot. Intellectual property laws exist to reward intelligence, creativity and problem solving; yet, as society adapts to a world immersed in artificial intelligence, the nation's intellectual property laws have yet to do the same. The Constitution seems to contemplate only human inventors when it says, in Article I, Section 8, Clause 8, "The Congress shall have Power … To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." The Patent Act similarly seems to limit patents to humans when it says, at 35 U.S.C. § 100(f), "The term 'inventor' means the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention." Recognizing the need to adapt, the U.S. Patent and Trademark Office (PTO) recently issued notices seeking public comments on intellectual property protection related to artificial intelligence.